
Research Engineer, Generative AI (Open-Source LLMs and Data Science Models) - Intern

NIO

Overview:

NIO Inc. is a pioneer in the premium smart electric vehicle market. Founded in November 2014, NIO’s mission is to shape a joyful lifestyle, and build a community starting with smart electric vehicles to grow together with users.

NIO designs, develops, jointly manufactures and sells premium smart electric vehicles, driving innovation in next-generation technologies in autonomous driving, digital technologies, electric powertrains and batteries. NIO differentiates itself through its continuous technological breakthroughs and innovations, such as its industry-leading battery swapping technologies, Battery as a Service (BaaS), as well as its proprietary autonomous driving technologies and Autonomous Driving as a Service (ADaaS). 

NIO’s product portfolio consists of the ES8, a six-seater smart electric flagship SUV, the ES7 (or the EL7), a mid-large five-seater smart electric SUV, the ES6 (or the EL6), a five-seater all-round smart electric SUV, the EC7, a five-seater smart electric flagship coupe SUV, the EC6, a five-seater smart electric coupe SUV, the ET9, a smart electric executive flagship, the ET7, a smart electric flagship sedan, the ET5, a mid-size smart electric sedan, and the ET5T, a smart electric tourer.

Responsibilities:

• Using an open source LLM, you will research and apply state-of-the-art techniques to build a proof of concept (POC) of an AI-based application on proprietary data by training, fine-tuning, and augmenting the base model (a minimal fine-tuning sketch follows below).
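
A minimal sketch of what LoRA fine-tuning of an open-source base model on proprietary data might look like, using the PEFT and Transformers libraries named in the qualifications. The model name, data file, and hyperparameters are illustrative assumptions only, not part of this role's actual stack.

```python
# Sketch: LoRA fine-tuning of an open-source LLM on proprietary data.
# Model name, data path, and hyperparameters are assumed for illustration.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-v0.1"  # assumed open-source base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Attach low-rank adapters so only a small fraction of weights are trained.
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# "proprietary.jsonl" is a placeholder for internal training data.
dataset = load_dataset("json", data_files="proprietary.jsonl", split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the adapter weights
```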

Qualifications:

• PhD or master’s degree with publications and research projects in Computer Science, Computer Engineering, Applied Mathematics, or Data Science.

• Experience with open source LLM models such as Llama 2, Mistral, Claude 2, Grok-1

• Experience fine-tuning and merging LLM models with LoRA, PEFT, Quantization, Tokenization

• Proficient in Python, AI-related training and inferencing tools such as PyTorch, vLLM, Ray

• Knowledge of prompting techniques such as iterative refinement, feedback loops, zero-shot, few-shot, and chain-of-thought (CoT) prompting (see the sketch after this list)

• Knowledge of model evaluation benchmarks such as HellaSwag, TruthfulQA, MMLU

• Strong understanding of natural language processing, machine learning, and AI-generated content development
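
As a small illustration of the prompting techniques listed above, the sketch below runs a few-shot chain-of-thought (CoT) prompt through vLLM, one of the inference tools named in the qualifications. The model name and example questions are assumptions for demonstration only.

```python
# Sketch: few-shot chain-of-thought (CoT) prompting served with vLLM.
# Model name and example questions are assumed for illustration.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # assumed instruct model
params = SamplingParams(temperature=0.0, max_tokens=256)

# Few-shot CoT: a worked example demonstrates the reasoning style
# before the real question is asked.
few_shot = (
    "Q: A pack has 12 cells at 3.7 V each in series. What is the pack voltage?\n"
    "A: Let's think step by step. 12 cells * 3.7 V = 44.4 V. "
    "The answer is 44.4 V.\n\n"
)
question = ("Q: A charger delivers 11 kW for 2.5 hours. "
            "How much energy is delivered?\n"
            "A: Let's think step by step.")

outputs = llm.generate([few_shot + question], params)
print(outputs[0].outputs[0].text)
```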